
    Building Footprint Generation Using Improved Generative Adversarial Networks

    Building footprint information is an essential ingredient for the 3-D reconstruction of urban models. Automatically generating building footprints from satellite images is challenging because of the complexity of building shapes. In this work, we propose improved generative adversarial networks (GANs) for the automatic generation of building footprints from satellite images: a conditional GAN whose cost function is derived from the Wasserstein distance and augmented with a gradient penalty term. The results indicate that the proposed method significantly improves the quality of the generated building footprints compared to conditional GANs, the U-Net, and other networks. In addition, our method removes nearly all hyperparameter tuning.
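
    As a rough illustration of the loss described above, the following is a minimal PyTorch sketch of a Wasserstein critic loss with gradient penalty for a conditional GAN; the critic(cond, image) interface and the penalty weight are assumptions for illustration, not the paper's implementation.

    ```python
    import torch

    def wgan_gp_critic_loss(critic, cond, real, fake, lam=10.0):
        """Wasserstein critic loss with gradient penalty (WGAN-GP),
        conditioned on the input satellite image `cond` (assumed interface)."""
        # Wasserstein estimate: the critic should score real high, fake low.
        loss_w = critic(cond, fake).mean() - critic(cond, real).mean()
        # Gradient penalty evaluated on random interpolates of real and fake.
        eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
        inter = (eps * real + (1 - eps) * fake).requires_grad_(True)
        scores = critic(cond, inter)
        grads, = torch.autograd.grad(scores.sum(), inter, create_graph=True)
        gp = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
        return loss_w + lam * gp
    ```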

    Non-Local Compressive Sensing Based SAR Tomography

    Tomographic SAR (TomoSAR) inversion of urban areas is an inherently sparse reconstruction problem and, hence, can be solved using compressive sensing (CS) algorithms. This paper proposes solutions for two notorious problems in this field: 1) TomoSAR requires a large number of data sets, which makes the technique expensive. However, it can be shown that the number of acquisitions and the signal-to-noise ratio (SNR) can be traded off against each other, because it is asymptotically only the product of the number of acquisitions and the SNR that determines the reconstruction quality. We propose to increase the SNR by integrating non-local estimation into the inversion and show that a reasonable reconstruction of buildings from only seven interferograms is feasible. 2) CS-based inversion is computationally expensive and therefore barely suitable for large-scale applications. We introduce a new fast and accurate algorithm for solving the non-local L1-L2 minimization problem that is central to CS-based reconstruction algorithms. The applicability of the algorithm is demonstrated using simulated data and TerraSAR-X high-resolution spotlight images over an area in Munich, Germany.
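
    For orientation, a generic iterative shrinkage-thresholding sketch for the L1-L2 problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 with complex-valued unknowns might look as follows; this is the textbook baseline, not the paper's non-local solver, and the step size and iteration count are illustrative assumptions.

    ```python
    import numpy as np

    def ista_complex(A, b, lam, n_iter=200):
        """Plain ISTA for min_x 0.5*||A x - b||_2^2 + lam*||x||_1
        with complex-valued x (a generic baseline, not the paper's solver)."""
        L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of A^H A
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_iter):
            z = x - A.conj().T @ (A @ x - b) / L   # gradient step on the data term
            # Complex soft-thresholding: shrink the magnitude, keep the phase.
            x = z * np.maximum(1 - lam / (L * np.abs(z) + 1e-12), 0)
        return x
    ```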

    A fast and accurate basis pursuit denoising algorithm with application to super-resolving tomographic SAR

    L1 regularization is used for finding sparse solutions to an underdetermined linear system. As sparse signals are widely expected in remote sensing, this type of regularization scheme and its extensions have been employed in many remote sensing problems, such as image fusion, target detection, and image super-resolution, and have led to promising results. However, solving such sparse reconstruction problems is computationally expensive, which limits their practical use. In this paper, we propose a novel efficient algorithm for solving the complex-valued L1-regularized least-squares problem. Taking high-dimensional tomographic synthetic aperture radar (TomoSAR) inversion as a practical example, we carried out extensive experiments with both simulated and real data to demonstrate that the proposed approach retains the accuracy of second-order methods while speeding up the processing by one to two orders of magnitude. Although we have chosen TomoSAR as the example, the proposed method can be applied to any spectral estimation problem.
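
    As a sketch of the kind of first-order scheme such an algorithm competes with, below is a generic complex-valued FISTA; the momentum schedule is the standard one, and nothing here is specific to the paper's proposed method.

    ```python
    import numpy as np

    def soft(v, tau):
        """Complex soft-thresholding: shrink the magnitude, preserve the phase."""
        return v * np.maximum(1 - tau / (np.abs(v) + 1e-12), 0)

    def fista_complex(A, b, lam, n_iter=200):
        """Generic FISTA for the complex-valued L1-regularized least-squares
        problem (a standard first-order baseline, not the proposed algorithm)."""
        L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant
        x = z = np.zeros(A.shape[1], dtype=complex)
        t = 1.0
        for _ in range(n_iter):
            x_new = soft(z - A.conj().T @ (A @ z - b) / L, lam / L)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + (t - 1) / t_new * (x_new - x)  # momentum extrapolation
            x, t = x_new, t_new
        return x
    ```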

    Seismic Performance of Hybrid Fiber Reinforced Polymer-Concrete Pier Columns

    As part of a multi-university research program funded by the NSF, a comprehensive experimental and analytical study of the seismic behavior of hybrid fiber reinforced polymer (FRP)-concrete columns is presented in this dissertation. The experimental investigation included cyclic tests of six large-scale concrete-filled FRP tube (CFFT) and RC columns followed by monotonic flexural tests, a nondestructive evaluation of damage using ultrasonic pulse velocity between the two test sets, and tension tests of sixty-five FRP coupons. Two analytical models, using ANSYS and OpenSees, were developed and favorably verified against both the cyclic and the monotonic flexural tests, and the results of the two methods were compared. A parametric study was also carried out to investigate the effect of three main parameters on primary seismic response measures, and the responses of typical CFFT columns to three representative earthquake records were investigated. The study shows that only specimens with carbon FRP cracked, whereas specimens with glass or hybrid FRP did not show any visible cracks throughout the cyclic tests. Further monotonic flexural tests showed that the carbon specimens experienced both flexural cracks in tension and crumpling in compression. The glass and hybrid specimens, on the other hand, all showed local buckling of the FRP tubes. Compared with conventional RC columns, CFFT columns possess higher flexural strength and energy dissipation with an extended plastic hinge region. Among all CFFT columns, the hybrid lay-up demonstrated the highest flexural strength and initial stiffness, mainly because of its high reinforcement index and FRP/concrete stiffness ratio, respectively. Moreover, at the same drift ratio, the hybrid lay-up was also the best in terms of energy dissipation. Specimens with glass FRP tubes, on the other hand, exhibited the highest ductility, owing to the greater flexibility of glass FRP composites. Furthermore, the ductility of CFFTs showed a strong correlation with the rupture strain of the FRP. The parametric study further showed that different FRP architectures and rebar types may lead to different failure modes for CFFT columns. Transient analysis under strong ground motions showed that the column with an off-axis nonlinear filament-wound glass FRP tube exhibited seismic performance superior to all other CFFTs. Moreover, higher FRP reinforcement ratios may lead to a brittle system failure, while a well-engineered FRP reinforcement configuration may significantly enhance the seismic performance of CFFT columns.

    Self-supervised Domain-agnostic Domain Adaptation for Satellite Images

    Domain shift caused by, e.g., different geographical regions or acquisition conditions is a common issue in machine learning for global-scale satellite image processing. A promising way to address this problem is domain adaptation, where the training and testing datasets are split into two or more domains according to their distributions and an adaptation method is applied to improve the generalizability of the model on the testing dataset. However, defining the domain to which each satellite image belongs is not trivial, especially in large-scale multi-temporal and multi-sensor scenarios, where a single image mosaic can be generated from multiple data sources. In this paper, we propose a self-supervised domain-agnostic domain adaptation (SS(DA)²) method that performs domain adaptation without such a domain definition. To achieve this, we first design a contrastive generative adversarial loss to train a generative network to perform image-to-image translation between any two satellite image patches. Then, we improve the generalizability of the downstream models by augmenting the training data with the spectral characteristics of different testing patches. Experimental results on public benchmarks verify the effectiveness of SS(DA)².
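
    The augmentation step described above could, under assumptions, look like the following sketch, where G is the pre-trained patch-to-patch generator; the G(src, ref) interface and the augmentation probability are hypothetical, not taken from the paper.

    ```python
    import random
    import torch

    def style_augment(batch, refs, G, p=0.5):
        """Hypothetical SS(DA)²-style augmentation: with probability p, translate
        a training patch toward the spectral style of a random reference patch
        using a pre-trained image-to-image generator G (interface assumed)."""
        out = []
        for x in batch:                      # batch: iterable of (C, H, W) tensors
            if random.random() < p:
                ref = random.choice(refs)    # a patch with target-like spectra
                with torch.no_grad():        # G stays frozen during augmentation
                    x = G(x.unsqueeze(0), ref.unsqueeze(0)).squeeze(0)
            out.append(x)
        return torch.stack(out)
    ```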

    Influencing Factors of Catering O2O Customer Experience: An Approach Integrating Big Data Analytics with Grounded Theory

    In the era of the digital economy, catering O2O is developing rapidly. Catering O2O (catering online-to-offline), i.e., catering takeout in this paper, means that customers place an order through an online ordering platform and delivery persons deliver the food provided by catering enterprises offline. Catering O2O conforms to the trend of the digital economy era but exposes a variety of problems, such as a low platform feedback rate, slow acceptance and handling of complaints, low customer feedback satisfaction, and poor customer experience. Although Meituan is China's leading e-commerce platform for life services, it received a "not recommended for placing an order" rating in the report "2020 China E-commerce User Experience and Complaint Monitoring". In order to improve the customer experience and service satisfaction of catering O2O, this paper takes Meituan takeout as an example and integrates big data analytics with grounded theory to explore the influencing factors of the catering O2O customer experience. With the big data analytics method, the main influencing factors are obtained from 54,250 customer reviews; the grounded theory method is then used to conduct an in-depth analysis of the negative reviews, and the influencing factors of the O2O customer experience are verified and confirmed. The results show that the main influencing factors of the catering O2O customer experience are catering food quality, delivery service quality, and after-sales service quality, with catering food quality and delivery service quality having a significant impact on customer experience. Finally, from the perspectives of catering O2O platforms and enterprises, the paper derives the following management implications: catering O2O platforms should attach great importance to the service contact points in the delivery process, strengthen last-mile delivery service quality, and improve the supervision and feedback mechanism; catering O2O enterprises should ensure the quality, portion, and packaging of catering food, so as to improve the customer experience and win electronic word-of-mouth and customer satisfaction.
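
    As a toy illustration of the review-mining step, the sketch below tallies candidate influencing factors in negative reviews using a hand-made keyword lexicon; the keywords and factor names are hypothetical stand-ins for the paper's actual big data analytics pipeline.

    ```python
    from collections import Counter

    # Hypothetical lexicon mapping review keywords to candidate factors.
    FACTORS = {
        "cold": "food quality", "stale": "food quality",
        "late": "delivery service", "spilled": "delivery service",
        "refund": "after-sales service", "no reply": "after-sales service",
    }

    def count_factors(reviews):
        """Toy frequency count of influencing factors in negative reviews."""
        tally = Counter()
        for text in reviews:
            for keyword, factor in FACTORS.items():
                if keyword in text.lower():
                    tally[factor] += 1
        return tally
    ```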

    γ-Net: Superresolving SAR Tomographic Inversion via Deep Learning

    Synthetic aperture radar tomography (TomoSAR) has been extensively employed for 3-D reconstruction in dense urban areas using high-resolution SAR acquisitions. Compressive sensing (CS)-based algorithms are generally considered the state of the art in super-resolving TomoSAR, in particular in the single-look case. This superior performance comes at the cost of an extra computational burden, because the sparse reconstruction cannot be solved analytically and requires computationally expensive iterative solvers. In this paper, we propose a novel deep learning-based super-resolving TomoSAR inversion approach, γ-Net, to tackle this challenge. γ-Net adopts the advanced complex-valued learned iterative shrinkage-thresholding algorithm (CV-LISTA) to mimic the iterative optimization step in sparse reconstruction. Simulations show that the height estimate from a well-trained γ-Net approaches the Cramér-Rao lower bound while improving the computational efficiency by one to two orders of magnitude compared to first-order CS-based methods. It also shows no degradation in super-resolution power compared to state-of-the-art second-order TomoSAR solvers, which are much more computationally expensive than the first-order methods. Specifically, γ-Net reaches a detection rate of more than 90% in moderately super-resolving cases with 25 measurements at 6 dB SNR. Moreover, a simulation at limited baselines demonstrates that the proposed algorithm outperforms the second-order CS-based method by a fair margin. A test on real TerraSAR-X data with just six interferograms also shows high-quality 3-D reconstruction with a high density of detected double scatterers.
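
    A single unrolled CV-LISTA layer, as described above, might be sketched as follows; the learned matrices W and S and the per-layer threshold theta have assumed shapes and are not the trained γ-Net weights.

    ```python
    import numpy as np

    def lista_layer(x, y, W, S, theta):
        """One unrolled CV-LISTA layer (sketch): a learned linear update
        followed by complex soft-thresholding with a learned threshold."""
        z = W @ y + S @ x                    # W: (N, M), S: (N, N), learned
        return z * np.maximum(1 - theta / (np.abs(z) + 1e-12), 0)

    def unrolled_forward(y, Ws, Ss, thetas):
        """A stack of K layers mimicking K iterations of ISTA."""
        x = np.zeros(Ss[0].shape[1], dtype=complex)
        for W, S, theta in zip(Ws, Ss, thetas):
            x = lista_layer(x, y, W, S, theta)
        return x
    ```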

    Few-shot Object Detection in Remote Sensing: Lifting the Curse of Incompletely Annotated Novel Objects

    Object detection is an essential and fundamental task in computer vision and satellite image processing. Existing deep learning methods have achieved impressive performance thanks to the availability of large-scale annotated datasets. Yet, in real-world applications the availability of labels is limited. In this context, few-shot object detection (FSOD) has emerged as a promising direction, which aims at enabling models to detect novel objects from only a few annotated examples. However, many existing FSOD algorithms overlook a critical issue: when an input image contains multiple novel objects and only a subset of them is annotated, the unlabeled objects are treated as background during training. This can cause confusion and severely impacts the model's ability to recall novel objects. To address this issue, we propose a self-training-based FSOD (ST-FSOD) approach, which incorporates a self-training mechanism into the few-shot fine-tuning process. ST-FSOD aims to enable the discovery of novel objects that are not annotated and to take them into account during training. On the one hand, we devise a two-branch region proposal network (RPN) to separate the proposal extraction of base and novel objects. On the other hand, we incorporate the student-teacher mechanism into the RPN and the region-of-interest (RoI) head to include highly confident yet unlabeled targets as pseudo labels. Experimental results demonstrate that our proposed method outperforms the state of the art in various FSOD settings by a large margin. The code will be publicly available at https://github.com/zhu-xlab/ST-FSOD.
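
    The pseudo-labeling idea can be sketched as follows; the confidence threshold and tensor layout are illustrative assumptions rather than details from the paper.

    ```python
    import torch

    def build_novel_targets(gt_boxes, teacher_boxes, teacher_scores, thresh=0.9):
        """Hypothetical target construction for the novel-object branch:
        high-confidence teacher detections are appended to the annotated
        boxes, so unlabeled novel objects stop being trained as background.

        gt_boxes: (N, 4) annotated boxes; teacher_boxes: (M, 4) proposals
        from the teacher model with confidences teacher_scores: (M,)."""
        keep = teacher_scores > thresh        # confident yet unlabeled objects
        pseudo = teacher_boxes[keep]
        return torch.cat([gt_boxes, pseudo], dim=0)
    ```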

    HTC-DC Net: Monocular Height Estimation from Single Remote Sensing Images

    3D geo-information is of great significance for understanding the living environment; however, 3D perception from remote sensing data, especially on a large scale, remains restricted. To tackle this problem, we propose a method for monocular height estimation from optical imagery, which is currently one of the richest sources of remote sensing data. As an ill-posed problem, monocular height estimation requires well-designed networks with enhanced representations to achieve good performance. Moreover, the distribution of height values is long-tailed, with low-height pixels, e.g., the background, as the head; thus, trained networks are usually biased and tend to underestimate building heights. To solve these problems, instead of formulating the task as a plain regression problem, we propose HTC-DC Net, which follows the classification-regression paradigm, with the head-tail cut (HTC) and the distribution-based constraints (DCs) as the main contributions. HTC-DC Net is composed of a backbone network as the feature extractor, the HTC-AdaBins module, and a hybrid regression process. The HTC-AdaBins module serves as the classification phase, determining bins adaptive to each input image. It is equipped with a vision transformer encoder to incorporate local context with holistic information and involves an HTC to address the long-tailed problem in monocular height estimation, balancing the performance on foreground and background pixels. The hybrid regression process performs the regression by smoothing the bins from the classification phase and is trained via the DCs. The proposed network is tested on three datasets of different resolutions, namely ISPRS Vaihingen (0.09 m), DFC19 (1.3 m), and GBH (3 m). Experimental results show the superiority of the proposed network over existing methods by large margins, and extensive ablation studies demonstrate the effectiveness of each design component.
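
    The bin-smoothing step of such a hybrid classification-regression head can be sketched as below, in the style of AdaBins; the tensor shapes are assumptions for illustration.

    ```python
    import torch

    def hybrid_height(logits, bin_edges):
        """AdaBins-style hybrid regression (sketch): the per-pixel height is
        the probability-weighted average of adaptive bin centers, smoothing
        the classification output into a continuous estimate.

        logits: (B, K, H, W) per-bin scores; bin_edges: (K + 1,) edges."""
        centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])   # (K,) bin centers
        probs = logits.softmax(dim=1)                      # per-pixel bin probs
        return torch.einsum("bkhw,k->bhw", probs, centers)
    ```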

    HyperLISTA-ABT: An Ultra-light Unfolded Network for Accurate Multi-component Differential Tomographic SAR Inversion

    Deep neural networks based on unrolled iterative algorithms have achieved remarkable success in sparse reconstruction applications, such as synthetic aperture radar (SAR) tomographic inversion (TomoSAR). However, the currently available deep learning-based TomoSAR algorithms are limited to three-dimensional (3D) reconstruction. The extension of deep learning-based algorithms to four-dimensional (4D) imaging, i.e., differential TomoSAR (D-TomoSAR) applications, is impeded mainly by the high-dimensional weight matrices required by the network designed for D-TomoSAR inversion, which typically contain millions of freely trainable parameters. Learning such a huge number of weights requires an enormous number of training samples, resulting in a large memory burden and excessive time consumption. To tackle this issue, we propose an efficient and accurate algorithm called HyperLISTA-ABT. The weights in HyperLISTA-ABT are determined analytically according to a minimum coherence criterion, trimming the model down to an ultra-light one with only three hyperparameters. Additionally, HyperLISTA-ABT improves on global thresholding by utilizing an adaptive blockwise thresholding scheme, which applies block-coordinate techniques and conducts thresholding in local blocks, so that weak expressions and local features are retained in the shrinkage step layer by layer. Simulations demonstrated the effectiveness of our approach, showing that HyperLISTA-ABT achieves superior computational efficiency with no significant performance degradation compared to state-of-the-art methods. Real-data experiments showed that a high-quality 4D point cloud can be reconstructed over a large area by the proposed HyperLISTA-ABT with affordable computational resources and in a short time.
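
    The blockwise shrinkage idea might look like the sketch below; choosing the per-block threshold as a magnitude quantile is an assumption made here for illustration, not the paper's rule.

    ```python
    import numpy as np

    def blockwise_soft_threshold(z, block=64, q=0.9):
        """Sketch of adaptive blockwise thresholding: the shrinkage level is
        set per local block (here via a magnitude quantile, an assumption),
        so weak local scatterers can survive a globally strong threshold."""
        out = np.zeros_like(z)                    # z: 1-D complex reflectivity
        for i in range(0, len(z), block):
            seg = z[i:i + block]
            tau = np.quantile(np.abs(seg), q)     # block-local threshold
            # Complex soft-thresholding within the block.
            out[i:i + block] = seg * np.maximum(1 - tau / (np.abs(seg) + 1e-12), 0)
        return out
    ```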